Introduction to Open Data Science - Course Project

# This is a so-called "R chunk" where you can write R code.

date()
## [1] "Sun Dec 12 23:05:33 2021"

I am interested in this course because of my research, and I expect it to be a strong introduction to data science. The course should give me enough knowledge to apply the data science process in my own research.

I learned about the course via an email sent to me by my department.

My GitHub repository can be found here.

Here is my course diary web page.


For this analysis, a data frame named learningAnalysis2014 is created by reading the CSV file "learning2014.csv". The data frame consists of 7 variables (gender, Age, attitude, deep, stra, surf, Points) and 166 observations. The data come from a survey of statistics students and include each student's global attitude toward statistics and their exam points. "deep", "stra" and "surf" are combined variables, created by taking the mean of the related question items. "attitude" was rescaled to the 1-5 Likert scale by dividing the original "Attitude" column by 10.
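
A minimal sketch of how these combined variables could have been created (the item names D06, D07, D11 and the 10-50 range of "Attitude" are illustrative assumptions based on the metadata, not the actual preprocessing script):

# toy stand-in for the raw JYTOPKYS3 survey data
raw <- data.frame(Attitude = c(37, 31),
                  D06 = c(4, 3), D07 = c(3, 2), D11 = c(4, 3))
raw$attitude <- raw$Attitude / 10                    # rescale back to the 1-5 Likert range
raw$deep <- rowMeans(raw[, c("D06", "D07", "D11")])  # combine items by taking the row mean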

More information about the data can be found here (https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS3-meta.txt)

learningAnalysis2014 <- read.csv(file = 'data/learning2014.csv')
dim(learningAnalysis2014)
## [1] 166   7
str(learningAnalysis2014)
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...

These are plots of all the pairwise relationships among the variables. From the visualization, we see some positive and negative correlations. An interesting one is the correlation between attitude and Points. As expected, we see a negative correlation between surf and deep. The most negative correlation is between deep and Points. We also see that there are more female than male students. However, the plots show no clear relationship between gender and Points, and no strong correlations involving gender.
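
The correlations described above can also be checked numerically; a quick sketch (not part of the original analysis):

# correlation matrix of the numeric variables (gender excluded), rounded
round(cor(learningAnalysis2014[-1]), 2)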

The summary table gives us more information about the means of the variables.

pairs(learningAnalysis2014[-1], col = "red")

library(ggplot2)

library(GGally)
## Registered S3 method overwritten by 'GGally':
##   method from   
##   +.gg   ggplot2
ggpairs(learningAnalysis2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))

summary(learningAnalysis2014)
##     gender               Age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           Points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

I am using the following 3 variables to explain Points: ‘attitude’, ‘stra’, and ‘deep’.

ggpairs(learningAnalysis2014, lower = list(combo = wrap("facethist", bins = 20)))

my_model <- lm(Points ~ attitude + stra + deep, data = learningAnalysis2014)

summary(my_model)
## 
## Call:
## lm(formula = Points ~ attitude + stra + deep, data = learningAnalysis2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.5239  -3.4276   0.5474   3.8220  11.5112 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.3915     3.4077   3.343  0.00103 ** 
## attitude      3.5254     0.5683   6.203 4.44e-09 ***
## stra          0.9621     0.5367   1.793  0.07489 .  
## deep         -0.7492     0.7507  -0.998  0.31974    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 162 degrees of freedom
## Multiple R-squared:  0.2097, Adjusted R-squared:  0.195 
## F-statistic: 14.33 on 3 and 162 DF,  p-value: 2.521e-08

From the above summary table, we see that the median of the residuals is 0.5474. This would suggest that it is difficult to predict Points exactly from attitude, stra, and deep. However, human behavior is very difficult to predict, and a median residual of about 0.5 could be acceptable in this case.

From the coefficients table, we see that the p-value for deep is high, which suggests that deep does not explain Points much. stra and attitude are better predictors of Points.

Below is a new regression where I have removed deep.

my_model2 <- lm(Points ~ attitude + stra, data = learningAnalysis2014)

summary(my_model2)
## 
## Call:
## lm(formula = Points ~ attitude + stra, data = learningAnalysis2014)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Based on the above summary table, we see that removing deep did not improve the fit; the multiple R-squared actually decreased slightly (from 0.2097 to 0.2048).
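
A formal way to compare the two nested models is an F-test; a quick sketch (not part of the original analysis):

# compare the reduced model (without deep) to the full model; a large
# p-value would indicate that dropping deep does not significantly change the fit
anova(my_model2, my_model)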

Below are the diagnostic plots: Residuals vs Fitted values, Normal Q-Q plot, and Residuals vs Leverage. The Q-Q plot shows a reasonable fit to the line, indicating good 'normality' of the residuals. The Residuals vs Fitted values plot looks reasonable, since the residuals appear random. The Residuals vs Leverage plot shows no observations with unusually high leverage.

my_model2 <- lm(Points ~ attitude + stra, data = learningAnalysis2014)

par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))


Logistic regression

Read the new data set alc.csv into a data frame named studentAlc and print the variable names.

studentAlc <- read.table("~/IODS-project/IODS-project/data/alc.csv", sep = ",", header = TRUE)

dim(studentAlc)
## [1] 370  35
colnames(studentAlc)
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "guardian"   "traveltime" "studytime"  "schoolsup" 
## [16] "famsup"     "activities" "nursery"    "higher"     "internet"  
## [21] "romantic"   "famrel"     "freetime"   "goout"      "Dalc"      
## [26] "Walc"       "health"     "failures"   "paid"       "absences"  
## [31] "G1"         "G2"         "G3"         "alc_use"    "high_use"

Data Set Information:

“This data approach student achievement in secondary education of two Portuguese schools. The data attributes include student grades, demographic, social and school related features, and it was collected by using school reports and questionnaires. Two datasets are provided regarding the performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued at the 3rd period), while G1 and G2 correspond to the 1st and 2nd period grades. It is more difficult to predict G3 without G2 and G1, but such prediction is much more useful (see paper source for more details).”

The above information is from UCI Machine Learning Repository.

More information about the data sets can be found here: (https://archive.ics.uci.edu/ml/datasets/Student+Performance)

I chose the following 4 variables to predict high use of alcohol: G3, absences, goout, and health. I chose these variables thinking that they would be easily available to a school without the need to survey students.

G3:

My assumption here is that a low grade is an indication of high alcohol use, since heavy alcohol use can affect cognitive functions such as memory.

absences:

A high number of absences could be an indication of high alcohol use, as drinking would affect one's schedule. Absences could be due to being sick after consuming a lot of alcohol.

goout:

Going out with friends a lot might raise alcohol consumption, since there are more opportunities to drink.

health:

Poor health could be an indication of high alcohol consumption. Alcohol can negatively affect physical and mental health.

Below are bar plots of all the variables.

# access the tidyverse libraries tidyr, dplyr, ggplot2
library(tidyr); library(dplyr); library(ggplot2)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
# draw a bar plot of each variable
gather(studentAlc) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

Box plots of my chosen variables vs high-use of alcohol

# initialize a plot of high_use and G3
g1 <- ggplot(studentAlc, aes(x = high_use, y = G3, col = sex))

# define the plot as a boxplot and draw it
g1 + geom_boxplot() + ylab("grade") + ggtitle("Student final grade by alcohol consumption and sex")

# initialise a plot of high_use and absences
g2 <- ggplot(studentAlc, aes(x = high_use, y = absences, col = sex))

# define the plot as a boxplot and draw it
g2 + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")

# initialize a plot of high_use and 
g3 <- ggplot(studentAlc, aes(x = high_use, y = goout, col = sex))

# define the plot as a boxplot and draw it
g3 + geom_boxplot() + ylab("going out") + ggtitle("Student going out with friends by alcohol consumption and sex")

g4 <- ggplot(studentAlc, aes(x = high_use, y = health, col = sex))

# define the plot as a boxplot and draw it
g4 + geom_boxplot() + ylab("health") + ggtitle("Student health by alcohol consumption and sex")

Numerical exploration of my chosen variables

# produce summary statistics by grade
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_grade = mean(G3))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_grade
##   <chr> <lgl>    <int>      <dbl>
## 1 F     FALSE      154       11.4
## 2 F     TRUE        41       11.8
## 3 M     FALSE      105       12.3
## 4 M     TRUE        70       10.3
# produce summary statistics by absences
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_absences = mean(absences))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_absences
##   <chr> <lgl>    <int>         <dbl>
## 1 F     FALSE      154          4.25
## 2 F     TRUE        41          6.85
## 3 M     FALSE      105          2.91
## 4 M     TRUE        70          6.1
# produce summary statistics by going out
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_going_out = mean(goout))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_going_out
##   <chr> <lgl>    <int>          <dbl>
## 1 F     FALSE      154           2.95
## 2 F     TRUE        41           3.39
## 3 M     FALSE      105           2.70
## 4 M     TRUE        70           3.93
# produce summary statistics by health
studentAlc %>% group_by(sex, high_use) %>% summarise(count = n(), mean_health = mean(health))
## `summarise()` has grouped output by 'sex'. You can override using the `.groups` argument.
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_health
##   <chr> <lgl>    <int>       <dbl>
## 1 F     FALSE      154        3.37
## 2 F     TRUE        41        3.39
## 3 M     FALSE      105        3.67
## 4 M     TRUE        70        3.93

Interpretation of the above plots and tables

We see that grades are lower for males when high_use of alcohol is true; females seem to be less negatively affected. Final grade is not as strong a predictor as I thought. Absences and high_use of alcohol seem to have a good correlation; females seem to have slightly more absences than males when high_use is true. Going out shows some correlation with high_use for both males and females: the more a student goes out, the more he or she consumes alcohol. Health does not seem to have a strong correlation with high_use of alcohol. I would have expected a stronger correlation here. It is possible that the participants did not answer this question truthfully, or that they are not aware of their overall (mental and physical) health. This seems to be especially true for males. Females have a broader range of answers, with 25% of them rating their health below 2 (Q1, high_use = TRUE).

Logistic regression to statistically explore the relationship between high_use and my chosen variables

For this model, high_use is the target variable, and final grade, absences, going out, and health are the predictors. I did not include sex because, if a school needs to predict high_use of alcohol, sex might add an unnecessary bias. For instance, male students might be watched more carefully than female students, because males seem to have a higher rate of high_use.

# find the model with glm()
m <- glm(high_use ~ G3 + absences + goout + health, data = studentAlc, family = "binomial")

# print out a summary of the model
summary(m)
## 
## Call:
## glm(formula = high_use ~ G3 + absences + goout + health, family = "binomial", 
##     data = studentAlc)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -1.8342  -0.7505  -0.5508   0.9357   2.3172  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -3.70363    0.79076  -4.684 2.82e-06 ***
## G3          -0.03852    0.03935  -0.979 0.327587    
## absences     0.07436    0.02212   3.362 0.000773 ***
## goout        0.72459    0.11941   6.068 1.29e-09 ***
## health       0.15179    0.09209   1.648 0.099275 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 386.07  on 365  degrees of freedom
## AIC: 396.07
## 
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m)
## (Intercept)          G3    absences       goout      health 
## -3.70363174 -0.03852050  0.07435996  0.72459398  0.15179099
# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR       2.5 %    97.5 %
## (Intercept) 0.0246339 0.004960425 0.1109072
## G3          0.9622120 0.890726272 1.0397748
## absences    1.0771945 1.033123265 1.1280521
## goout       2.0638929 1.642693029 2.6260422
## health      1.1639169 0.973629481 1.3981834

Interpretation of the above model

From the model p-values, we see that absences and going out (goout) are likely relevant variables for explaining high_use of alcohol. For final grade and health, however, the data provide little evidence that these variables are needed to explain high_use.

From the odds ratio table, we see that absences, going out, and health have an OR greater than 1, which implies that these variables are positively associated with high_use of alcohol. Going out has an odds ratio of about 2, showing a strong positive association with high_use. Final grade (G3) has an odds ratio slightly below 1, suggesting a weak negative association, if any.
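
Note that the odds ratios are simply the exponentiated model coefficients; for example, for goout:

# exp of the goout coefficient reproduces its odds ratio
exp(coef(m)["goout"])  # exp(0.72459) is about 2.064, matching the OR table above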

It seems that most of my chosen variables have some association with high_use, so they could be useful for predicting it.

Predictive power of my model

# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# see the last ten original classes, predicted probabilities, and class predictions
select(studentAlc, G3, absences, goout, health, high_use, probability, prediction) %>% tail(10)
##     G3 absences goout health high_use probability prediction
## 361  2        7     3      3     TRUE  0.34728425      FALSE
## 362 11        3     3      3     TRUE  0.21838164      FALSE
## 363 10        2     1      5     TRUE  0.07895958      FALSE
## 364 16        4     4      2     TRUE  0.30564438      FALSE
## 365 12        3     2      3    FALSE  0.11524638      FALSE
## 366  8        4     3      3     TRUE  0.25252304      FALSE
## 367 14        0     2      5    FALSE  0.11559977      FALSE
## 368  9        4     4      5     TRUE  0.47613178      FALSE
## 369 10        8     4      2     TRUE  0.42751451      FALSE
## 370  0        0     2      5    FALSE  0.18309931      FALSE
# tabulate the target variable versus the predictions
table(high_use = studentAlc$high_use, prediction = studentAlc$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   233   26
##    TRUE     65   46
# initialize a plot of 'high_use' versus 'probability' in 'studentAlc'
g <- ggplot(studentAlc, aes(x = probability, y = high_use, col = prediction))

# define the geom as points and draw the plot
g + geom_point()

# tabulate the target variable versus the predictions
table(high_use = studentAlc$high_use, prediction = studentAlc$prediction) %>% prop.table %>% addmargins
##         prediction
## high_use      FALSE       TRUE        Sum
##    FALSE 0.62972973 0.07027027 0.70000000
##    TRUE  0.17567568 0.12432432 0.30000000
##    Sum   0.80540541 0.19459459 1.00000000
# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2459459

The goal of a loss function is to get as small a number as possible. The loss function here is around 0.25, meaning the model predicts incorrectly about 25% of the time. The model performs better than a simple guessing strategy.
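
For comparison, a baseline strategy that always predicts FALSE (no high use) would be wrong for every high-use student; a quick sketch of that baseline:

# predict probability 0 for everyone; 111 of the 370 students are high users,
# so this baseline misclassifies about 30% of the time
loss_func(class = studentAlc$high_use, prob = 0)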

10-fold cross-validation

Bonus

# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2567568

The result of the cross-validation (K = 10) is similar to the previous training-set result of the loss function. My model seems to produce a similar result to the model found in DataCamp.
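
Because the folds are assigned randomly, the cross-validation error varies a little between runs; averaging several repetitions gives a more stable estimate (a sketch; the seed value is an arbitrary choice):

set.seed(50)
# repeat 10-fold cross-validation ten times and average the prediction errors
errs <- replicate(10, cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)$delta[1])
mean(errs)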

Super Bonus

Many variables

# New model with many variables
m <- glm(high_use ~ G3 + absences + goout + health + studytime + failures + freetime + famrel, data = studentAlc, family = "binomial")

# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                    OR      2.5 %    97.5 %
## (Intercept) 0.1112482 0.01462431 0.7969864
## G3          1.0004152 0.91884143 1.0909553
## absences    1.0707813 1.02620258 1.1197948
## goout       2.0410506 1.59736372 2.6461037
## health      1.1744770 0.97382116 1.4240680
## studytime   0.6138411 0.43227534 0.8559677
## failures    1.3512750 0.85082981 2.1704783
## freetime    1.1869927 0.89658690 1.5765989
## famrel      0.6655703 0.49995249 0.8817370
# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2324324
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2513514

New model with only 2 variables

# New model with 2 variables
m <- glm(high_use ~ absences + goout, data = studentAlc, family = "binomial")

# compute odds ratios (OR)
OR <- coef(m) %>% exp

# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                     OR      2.5 %     97.5 %
## (Intercept) 0.02644199 0.01082308 0.06045431
## absences    1.07930582 1.03477610 1.13034717
## goout       2.08298255 1.66258410 2.64170298
# predict() the probability of high_use
probabilities <- predict(m, type = "response")

# add the predicted probabilities to 'studentAlc'
studentAlc <- mutate(studentAlc, probability = probabilities)

# use the probabilities to make a prediction of high_use
studentAlc <- mutate(studentAlc, prediction = probability > 0.5)

# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = studentAlc$high_use, prob = studentAlc$probability)
## [1] 0.2378378
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = studentAlc, cost = loss_func, glmfit = m, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2405405

It seems that the prediction error does not change much when going from many variables down to only 2.


Chapter 4: Clustering and Classification

Part 2: explore the structure and the dimensions of the data

Loading the Boston data.

# set plots size
knitr::opts_chunk$set(fig.width=16, fig.height=10) 


#code from DataCamp.

#access the MASS package
library(dplyr)
library(MASS)
library(corrplot)
library(tidyr)


#load the data
data("Boston")

#explore the dataset
str(Boston)
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506  14
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Description:

The Boston data set represents housing values in suburbs of Boston.

The Boston data frame has 506 rows and 14 columns.

This data frame contains the following columns:

crim = per capita crime rate by town
zn = proportion of residential land zoned for lots over 25,000 sq.ft
indus = proportion of non-retail business acres per town
chas = Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
nox = nitrogen oxides concentration (parts per 10 million)
rm = average number of rooms per dwelling
age = proportion of owner-occupied units built prior to 1940
dis = weighted mean of distances to five Boston employment centres
rad = index of accessibility to radial highways
tax = full-value property-tax rate per $10,000
ptratio = pupil-teacher ratio by town
black = 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
lstat = lower status of the population (percent)
medv = median value of owner-occupied homes in $1000s

Source:

Harrison, D. and Rubinfeld, D.L. (1978) Hedonic prices and the demand for clean air. J. Environ. Economics and Management 5, 81–102.

Belsley D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics. Identifying Influential Data and Sources of Collinearity. New York: Wiley.

Part 3: graphical overview of the data

#plot matrix of the variables
pairs(Boston, gap=1/30)

# MASS, corrplot, tidyr and Boston dataset are available

# calculate the correlation matrix and round it
cor_matrix <- cor(Boston) %>% round(digits = 2)

# print the correlation matrix
cor_matrix
##          crim    zn indus  chas   nox    rm   age   dis   rad   tax ptratio
## crim     1.00 -0.20  0.41 -0.06  0.42 -0.22  0.35 -0.38  0.63  0.58    0.29
## zn      -0.20  1.00 -0.53 -0.04 -0.52  0.31 -0.57  0.66 -0.31 -0.31   -0.39
## indus    0.41 -0.53  1.00  0.06  0.76 -0.39  0.64 -0.71  0.60  0.72    0.38
## chas    -0.06 -0.04  0.06  1.00  0.09  0.09  0.09 -0.10 -0.01 -0.04   -0.12
## nox      0.42 -0.52  0.76  0.09  1.00 -0.30  0.73 -0.77  0.61  0.67    0.19
## rm      -0.22  0.31 -0.39  0.09 -0.30  1.00 -0.24  0.21 -0.21 -0.29   -0.36
## age      0.35 -0.57  0.64  0.09  0.73 -0.24  1.00 -0.75  0.46  0.51    0.26
## dis     -0.38  0.66 -0.71 -0.10 -0.77  0.21 -0.75  1.00 -0.49 -0.53   -0.23
## rad      0.63 -0.31  0.60 -0.01  0.61 -0.21  0.46 -0.49  1.00  0.91    0.46
## tax      0.58 -0.31  0.72 -0.04  0.67 -0.29  0.51 -0.53  0.91  1.00    0.46
## ptratio  0.29 -0.39  0.38 -0.12  0.19 -0.36  0.26 -0.23  0.46  0.46    1.00
## black   -0.39  0.18 -0.36  0.05 -0.38  0.13 -0.27  0.29 -0.44 -0.44   -0.18
## lstat    0.46 -0.41  0.60 -0.05  0.59 -0.61  0.60 -0.50  0.49  0.54    0.37
## medv    -0.39  0.36 -0.48  0.18 -0.43  0.70 -0.38  0.25 -0.38 -0.47   -0.51
##         black lstat  medv
## crim    -0.39  0.46 -0.39
## zn       0.18 -0.41  0.36
## indus   -0.36  0.60 -0.48
## chas     0.05 -0.05  0.18
## nox     -0.38  0.59 -0.43
## rm       0.13 -0.61  0.70
## age     -0.27  0.60 -0.38
## dis      0.29 -0.50  0.25
## rad     -0.44  0.49 -0.38
## tax     -0.44  0.54 -0.47
## ptratio -0.18  0.37 -0.51
## black    1.00 -0.37  0.33
## lstat   -0.37  1.00 -0.74
## medv     0.33 -0.74  1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex = 0.6)

corrplot(cor_matrix, method = 'number', type="upper")

Looking at the data, we see that medv has a positive correlation with rm and a negative correlation with lstat. This makes sense, as a house would have a higher price with more rooms, and houses located in a lower-status area would have lower prices. We also see that nox has a positive correlation with indus and age, and a negative correlation with dis. The more industry there is, the higher the emission of nitrogen oxides, and older houses are probably concentrated in the old industrial areas of the city. medv has a negative correlation with crim (-0.39). At first glance, the value of houses is mainly driven by the number of rooms per dwelling. Tax also has a negative correlation with medv (-0.47).
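
A quick way to rank these relationships for a single variable, for instance medv (a sketch, not part of the original analysis):

# correlations of all variables with medv, from most negative to most positive
sort(cor_matrix[, "medv"])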

Part 4: standardize the dataset

During scaling of the data, the column mean is subtracted from each value and the difference is divided by the column standard deviation. As an example, here is the first value of rm before and after scaling.
Before:
row 1: rm = 6.575
After:
row 1: rm = 0.413262920
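
The same transformation can be verified by hand for that value (a quick check using the unscaled data):

# (value - column mean) / column standard deviation for row 1 of rm;
# this should print approximately 0.4133, matching the scaled value above
(Boston$rm[1] - mean(Boston$rm)) / sd(Boston$rm)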

# center and standardize variables
boston_scaled <- scale(Boston)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

Use the quantiles as break points to create a categorical crime variable, and divide the dataset into train and test sets

# summary of the scaled crime rate
summary(boston_scaled$crim)
##      Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
## -0.419367 -0.410563 -0.390280  0.000000  0.007389  9.924110
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
##           0%          25%          50%          75%         100% 
## -0.419366929 -0.410563278 -0.390280295  0.007389247  9.924109610
# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]
# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

Now 80% of the data belongs to the train set. I saved the correct crime categories from the test set and removed the categorical crime variable from the test data.
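
Note that sample() draws a different random 80% on every run, so the split above is not reproducible as such; fixing the seed before sampling would make it so (a sketch; the seed value is an arbitrary choice):

set.seed(123)  # any fixed value works
ind <- sample(n, size = n * 0.8)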

Part 5: linear discriminant analysis

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   med_low  med_high      high 
## 0.2574257 0.2524752 0.2475248 0.2425743 
## 
## Group means:
##                  zn       indus        chas        nox         rm        age
## low       0.9204904 -0.91193584 -0.12090214 -0.8782791  0.4716162 -0.8914111
## med_low  -0.1749097 -0.22715264 -0.04073494 -0.5424073 -0.2011207 -0.3333607
## med_high -0.3749021  0.08343905  0.20012296  0.3259898  0.1504028  0.4280137
## high     -0.4872402  1.01719597 -0.07145661  1.0780091 -0.3975976  0.8214925
##                 dis        rad        tax     ptratio       black      lstat
## low       0.8863868 -0.6892330 -0.7626524 -0.46090827  0.37941206 -0.7820678
## med_low   0.3239559 -0.5607666 -0.4664819 -0.01251866  0.35867821 -0.1022016
## med_high -0.3589343 -0.4260132 -0.3572338 -0.23581822  0.06956764  0.0109559
## high     -0.8683550  1.6373367  1.5134896  0.77985517 -0.84015445  0.9076136
##                medv
## low       0.5525236
## med_low  -0.0406631
## med_high  0.1520254
## high     -0.7451492
## 
## Coefficients of linear discriminants:
##                 LD1         LD2         LD3
## zn       0.09291863  0.69985636 -0.87796705
## indus    0.03053132 -0.12468573  0.41899216
## chas    -0.10201193 -0.08250128  0.01816864
## nox      0.39531543 -0.67513639 -1.32834260
## rm      -0.12218291 -0.06127500 -0.23682516
## age      0.19751937 -0.46873738 -0.20225462
## dis     -0.05492489 -0.21359549  0.20455652
## rad      3.36773523  0.90989444 -0.08874001
## tax      0.02342119  0.03991802  0.61440283
## ptratio  0.09227121  0.01644471 -0.24956937
## black   -0.11449254  0.02101592  0.18057073
## lstat    0.23164876 -0.23280037  0.40508204
## medv     0.17679314 -0.30905705 -0.05937904
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9511 0.0350 0.0139
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)

Part 6: Predict the crime classes with the test data and cross tabulate the results

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
##           predicted
## correct    low med_low med_high high
##   low       15       5        3    0
##   med_low   11       7        6    0
##   med_high   1       4       19    2
##   high       0       0        0   29

The best predictions are for the high category. The model does not predict med_low very well. We can see in the LD plot that med_high and med_low lie on the same side, so they are easily confused. The model predicts high correctly because the distance between high and the low, med_low, and med_high groups is large.
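
The overall accuracy can be computed from the same cross-tabulation (a quick sketch):

# proportion of correct predictions: table diagonal over all test observations,
# here (15 + 7 + 19 + 29) / 102, about 0.69
sum(diag(table(correct = correct_classes, predicted = lda.pred$class))) / length(correct_classes)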

Part 7: Reload the Boston dataset and standardize the dataset

# load MASS and Boston
library(MASS)
data('Boston')

# scale data
Boston = as.data.frame(scale(Boston))

# euclidean distance matrix
dist_eu <- dist(Boston)

# look at the summary of the distances
summary(dist_eu)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(Boston, method = 'manhattan')

# look at the summary of the distances
summary(dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618

cluster with centers = 3

# k-means clustering
km <- kmeans(Boston, centers = 3)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

cluster with centers = 1

# k-means clustering
km <- kmeans(Boston, centers = 1)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

cluster with centers = 2

# k-means clustering
km <- kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston[6:10], col = km$cluster)

The clustering with centers = 3 seems to be the best.

Bonus and super bonus

library(ggplot2)
# Boston dataset is available
set.seed(123)

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})

# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

# k-means clustering
km <- kmeans(Boston, centers = 2)

# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)

model_predictors <- dplyr::select(train, -crime)

# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)


#install.packages("plotly")
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')

Chapter 5: Dimensionality reduction techniques

Part 1: Show a graphical overview of the data and show summaries of the variables in the data

# set plots size
knitr::opts_chunk$set(fig.width=16, fig.height=10) 

# uploading dataset from: http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt
human <- read.csv("http://s3.amazonaws.com/assets.datacamp.com/production/course_2218/datasets/human2.txt", sep = ",",  header = TRUE)

# code is from DataCamp.
# library
library(GGally)
library(corrplot)

# visualize the 'human_' variables
ggpairs(human)

# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot

# summary of the data
summary(human)
##     Edu2.FM          Labo.FM          Edu.Exp         Life.Exp    
##  Min.   :0.1717   Min.   :0.1857   Min.   : 5.40   Min.   :49.00  
##  1st Qu.:0.7264   1st Qu.:0.5984   1st Qu.:11.25   1st Qu.:66.30  
##  Median :0.9375   Median :0.7535   Median :13.50   Median :74.20  
##  Mean   :0.8529   Mean   :0.7074   Mean   :13.18   Mean   :71.65  
##  3rd Qu.:0.9968   3rd Qu.:0.8535   3rd Qu.:15.20   3rd Qu.:77.25  
##  Max.   :1.4967   Max.   :1.0380   Max.   :20.20   Max.   :83.50  
##       GNI            Mat.Mor         Ado.Birth         Parli.F     
##  Min.   :   581   Min.   :   1.0   Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5   1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0   Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1   Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0   3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0   Max.   :204.80   Max.   :57.50

Interpretation:

From the above graphs, we see a variety of distributions, some roughly normal and some skewed. We also see some strong negative and positive correlations. For instance, Ado.Birth has a positive correlation with Mat.Mor (0.759), and Mat.Mor and Life.Exp have a negative correlation (-0.857). Another interesting correlation is the positive one between Life.Exp and Edu.Exp (0.789). There is a positive correlation between GNI and Life.Exp (0.627), but it is lower than I would have expected: Life.Exp does increase in countries with high GNI, but not by much. There is a negative correlation between Ado.Birth and Edu.Exp, which suggests that the two variables move in opposite directions.

Part 2: Perform principal component analysis (PCA) on the non-standardized human data.

# code is from DataCamp

# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human)

# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("red", "blue"))

Part 3: Standardize the variables in the human data and repeat the above analysis.

# code is from DataCamp

# standardize the variables
human_std <- scale(human)

# print out summaries of the standardized variables
summary(human_std)
##     Edu2.FM           Labo.FM           Edu.Exp           Life.Exp      
##  Min.   :-2.8189   Min.   :-2.6247   Min.   :-2.7378   Min.   :-2.7188  
##  1st Qu.:-0.5233   1st Qu.:-0.5484   1st Qu.:-0.6782   1st Qu.:-0.6425  
##  Median : 0.3503   Median : 0.2316   Median : 0.1140   Median : 0.3056  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5958   3rd Qu.: 0.7350   3rd Qu.: 0.7126   3rd Qu.: 0.6717  
##  Max.   : 2.6646   Max.   : 1.6632   Max.   : 2.4730   Max.   : 1.4218  
##       GNI             Mat.Mor          Ado.Birth          Parli.F       
##  Min.   :-0.9193   Min.   :-0.6992   Min.   :-1.1325   Min.   :-1.8203  
##  1st Qu.:-0.7243   1st Qu.:-0.6496   1st Qu.:-0.8394   1st Qu.:-0.7409  
##  Median :-0.3013   Median :-0.4726   Median :-0.3298   Median :-0.1403  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.3712   3rd Qu.: 0.1932   3rd Qu.: 0.6030   3rd Qu.: 0.6127  
##  Max.   : 5.6890   Max.   : 4.4899   Max.   : 3.8344   Max.   : 3.1850
# perform principal component analysis (with the SVD method)
pca_human <- prcomp(human_std)

# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex = c(0.8, 1), col = c("red", "blue"))

# create and print out a summary of pca_human
s <- summary(pca_human)
s
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.0708 1.1397 0.87505 0.77886 0.66196 0.53631 0.45900
## Proportion of Variance 0.5361 0.1624 0.09571 0.07583 0.05477 0.03595 0.02634
## Cumulative Proportion  0.5361 0.6984 0.79413 0.86996 0.92473 0.96069 0.98702
##                            PC8
## Standard deviation     0.32224
## Proportion of Variance 0.01298
## Cumulative Proportion  1.00000
# rounded percentages of variance captured by each PC
pca_pr <- round(100*s$importance[2,], digits = 1) 

# print out the percentages of variance
pca_pr
##  PC1  PC2  PC3  PC4  PC5  PC6  PC7  PC8 
## 53.6 16.2  9.6  7.6  5.5  3.6  2.6  1.3
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")

# draw a biplot
biplot(pca_human, cex = c(0.8, 1), col = c("red", "blue"), xlab = pc_lab[1], ylab = pc_lab[2])

Interpretation:

From the above tables and graphs, we see that the non-standardized data are difficult to interpret: the biplot shows most of the countries crowded in the top right corner, and most arrows appear to have almost zero length. The standardized PCA biplot shows most of the countries in the middle of the graph. We see that the arrows of Mat.Mor and Ado.Birth have a small angle between them, meaning that they are correlated; they have a positive correlation and are explained mostly by PC1. GNI and Edu2.FM are strongly correlated and are also explained mostly by PC1. Overall, the angles between arrows reflect correlations: the smaller the angle, the stronger the correlation.
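
The reason the unstandardized PCA is so hard to read is that PCA maximizes variance, and the raw variables are on wildly different scales; a quick check (not part of the original analysis):

# variances of the raw variables: GNI is orders of magnitude larger than the
# rest, so it dominates the first principal component before standardization
sapply(human, var)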

Part 4: personal interpretations of the first two principal component dimensions based on the biplot drawn after PCA on the standardized human data

Interpretation:

PC1 captures 53.6% of the total variance in the original variables, and PC2 captures 16.2%. We see that Ado.Birth, Life.Exp, Edu2.FM, Mat.Mor, and GNI have a small angle with the PC1 axis, meaning that these variables correlate strongly with PC1. The same holds for PC2 with Labo.FM and Parli.F. The lengths of the arrows are proportional to the standard deviations of the features; Mat.Mor, Labo.FM, and Life.Exp have the longest arrows. Variables pointing along the PC1 direction contribute mainly to the PC1 dimension, and likewise for PC2.
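
The percentages above can also be computed directly from the component standard deviations (a quick check):

# proportion of variance = squared standard deviation over total variance
round(100 * pca_human$sdev^2 / sum(pca_human$sdev^2), 1)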

Part 5: Load the tea dataset from the package FactoMineR

# code is from DataCamp

library(dplyr) 
library(ggplot2)
library(tidyr)
library(FactoMineR)

data("tea")

dim(tea)
## [1] 300  36
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

# select the 'keep_columns' to create a new dataset
tea_time <- select(tea, one_of(keep_columns))

# look at the summaries and structure of the data
summary(tea_time)
##         Tea         How                      how           sugar    
##  black    : 74   alone:195   tea bag           :170   No.sugar:155  
##  Earl Grey:193   lemon: 33   tea bag+unpackaged: 94   sugar   :145  
##  green    : 33   milk : 63   unpackaged        : 36                 
##                  other:  9                                          
##                   where           lunch    
##  chain store         :192   lunch    : 44  
##  chain store+tea shop: 78   Not.lunch:256  
##  tea shop            : 30                  
## 
str(tea_time)
## 'data.frame':    300 obs. of  6 variables:
##  $ Tea  : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How  : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ how  : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))

# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali", graph.type = "classic")

Interpretation:

The tea dataset relates to the consumption of tea: how, when, and where it is consumed. There are 300 observations and 36 variables. We see that Earl Grey is the favorite type of tea and that it is mainly consumed plain (alone). Tea is mainly bought in chain stores, as tea bags, and is consumed before or after lunch rather than during it; sugar use is split about evenly. In the MCA factor map, we see that milk, Earl Grey, sugar, tea bag, and chain store lie in the direction of Dim 2, grouped on the middle-left side of the plot. The other categories are more dispersed, on the right side of the plot. Since categories closer to the origin are more common, we can compare, for instance, Earl Grey with black tea: Earl Grey is closer to the origin, so more people drink Earl Grey than black tea. The same holds for milk compared with lemon.
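
The share of inertia captured by each MCA dimension can also be read straight from the model object (a quick check against the summary above):

# eigenvalue, percentage of variance, and cumulative percentage per dimension
head(mca$eig, 3)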


Chapter 6: Analysis of longitudinal data

Part 1: Implement the analyses of Chapter 8 of MABS

RATS dataset

# most of the code is from DataCamp

# set plots size
knitr::opts_chunk$set(fig.width=16, fig.height=10)

# Loading RATS dataset

# libraries
library(dplyr)
library(tidyr)

# load rats data
RATS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/rats.txt", sep = "\t", header = TRUE)

# Factor variables ID and Group
RATS$ID <- factor(RATS$ID)
RATS$Group <- factor(RATS$Group)

RATS
##    ID Group WD1 WD8 WD15 WD22 WD29 WD36 WD43 WD44 WD50 WD57 WD64
## 1   1     1 240 250  255  260  262  258  266  266  265  272  278
## 2   2     1 225 230  230  232  240  240  243  244  238  247  245
## 3   3     1 245 250  250  255  262  265  267  267  264  268  269
## 4   4     1 260 255  255  265  265  268  270  272  274  273  275
## 5   5     1 255 260  255  270  270  273  274  273  276  278  280
## 6   6     1 260 265  270  275  275  277  278  278  284  279  281
## 7   7     1 275 275  260  270  273  274  276  271  282  281  284
## 8   8     1 245 255  260  268  270  265  265  267  273  274  278
## 9   9     2 410 415  425  428  438  443  442  446  456  468  478
## 10 10     2 405 420  430  440  448  460  458  464  475  484  496
## 11 11     2 445 445  450  452  455  455  451  450  462  466  472
## 12 12     2 555 560  565  580  590  597  595  595  612  618  628
## 13 13     3 470 465  475  485  487  493  493  504  507  518  525
## 14 14     3 535 525  530  533  535  540  525  530  543  544  559
## 15 15     3 520 525  530  540  543  546  538  544  553  555  548
## 16 16     3 510 510  520  515  530  538  535  542  550  553  569
# Convert data to long form
RATSL <- RATS %>%
  gather(key = WD, value = Weight, -ID, -Group) %>%
  mutate(Day = as.integer(substr(WD,3,4))) 

# dataset
print(RATSL, row.names = FALSE)
##  ID Group   WD Weight Day
##   1     1  WD1    240   1
##   2     1  WD1    225   1
##   3     1  WD1    245   1
##   4     1  WD1    260   1
##   5     1  WD1    255   1
##   6     1  WD1    260   1
##   7     1  WD1    275   1
##   8     1  WD1    245   1
##   9     2  WD1    410   1
##  10     2  WD1    405   1
##  11     2  WD1    445   1
##  12     2  WD1    555   1
##  13     3  WD1    470   1
##  14     3  WD1    535   1
##  15     3  WD1    520   1
##  16     3  WD1    510   1
##   1     1  WD8    250   8
##   2     1  WD8    230   8
##   3     1  WD8    250   8
##   4     1  WD8    255   8
##   5     1  WD8    260   8
##   6     1  WD8    265   8
##   7     1  WD8    275   8
##   8     1  WD8    255   8
##   9     2  WD8    415   8
##  10     2  WD8    420   8
##  11     2  WD8    445   8
##  12     2  WD8    560   8
##  13     3  WD8    465   8
##  14     3  WD8    525   8
##  15     3  WD8    525   8
##  16     3  WD8    510   8
##   1     1 WD15    255  15
##   2     1 WD15    230  15
##   3     1 WD15    250  15
##   4     1 WD15    255  15
##   5     1 WD15    255  15
##   6     1 WD15    270  15
##   7     1 WD15    260  15
##   8     1 WD15    260  15
##   9     2 WD15    425  15
##  10     2 WD15    430  15
##  11     2 WD15    450  15
##  12     2 WD15    565  15
##  13     3 WD15    475  15
##  14     3 WD15    530  15
##  15     3 WD15    530  15
##  16     3 WD15    520  15
##   1     1 WD22    260  22
##   2     1 WD22    232  22
##   3     1 WD22    255  22
##   4     1 WD22    265  22
##   5     1 WD22    270  22
##   6     1 WD22    275  22
##   7     1 WD22    270  22
##   8     1 WD22    268  22
##   9     2 WD22    428  22
##  10     2 WD22    440  22
##  11     2 WD22    452  22
##  12     2 WD22    580  22
##  13     3 WD22    485  22
##  14     3 WD22    533  22
##  15     3 WD22    540  22
##  16     3 WD22    515  22
##   1     1 WD29    262  29
##   2     1 WD29    240  29
##   3     1 WD29    262  29
##   4     1 WD29    265  29
##   5     1 WD29    270  29
##   6     1 WD29    275  29
##   7     1 WD29    273  29
##   8     1 WD29    270  29
##   9     2 WD29    438  29
##  10     2 WD29    448  29
##  11     2 WD29    455  29
##  12     2 WD29    590  29
##  13     3 WD29    487  29
##  14     3 WD29    535  29
##  15     3 WD29    543  29
##  16     3 WD29    530  29
##   1     1 WD36    258  36
##   2     1 WD36    240  36
##   3     1 WD36    265  36
##   4     1 WD36    268  36
##   5     1 WD36    273  36
##   6     1 WD36    277  36
##   7     1 WD36    274  36
##   8     1 WD36    265  36
##   9     2 WD36    443  36
##  10     2 WD36    460  36
##  11     2 WD36    455  36
##  12     2 WD36    597  36
##  13     3 WD36    493  36
##  14     3 WD36    540  36
##  15     3 WD36    546  36
##  16     3 WD36    538  36
##   1     1 WD43    266  43
##   2     1 WD43    243  43
##   3     1 WD43    267  43
##   4     1 WD43    270  43
##   5     1 WD43    274  43
##   6     1 WD43    278  43
##   7     1 WD43    276  43
##   8     1 WD43    265  43
##   9     2 WD43    442  43
##  10     2 WD43    458  43
##  11     2 WD43    451  43
##  12     2 WD43    595  43
##  13     3 WD43    493  43
##  14     3 WD43    525  43
##  15     3 WD43    538  43
##  16     3 WD43    535  43
##   1     1 WD44    266  44
##   2     1 WD44    244  44
##   3     1 WD44    267  44
##   4     1 WD44    272  44
##   5     1 WD44    273  44
##   6     1 WD44    278  44
##   7     1 WD44    271  44
##   8     1 WD44    267  44
##   9     2 WD44    446  44
##  10     2 WD44    464  44
##  11     2 WD44    450  44
##  12     2 WD44    595  44
##  13     3 WD44    504  44
##  14     3 WD44    530  44
##  15     3 WD44    544  44
##  16     3 WD44    542  44
##   1     1 WD50    265  50
##   2     1 WD50    238  50
##   3     1 WD50    264  50
##   4     1 WD50    274  50
##   5     1 WD50    276  50
##   6     1 WD50    284  50
##   7     1 WD50    282  50
##   8     1 WD50    273  50
##   9     2 WD50    456  50
##  10     2 WD50    475  50
##  11     2 WD50    462  50
##  12     2 WD50    612  50
##  13     3 WD50    507  50
##  14     3 WD50    543  50
##  15     3 WD50    553  50
##  16     3 WD50    550  50
##   1     1 WD57    272  57
##   2     1 WD57    247  57
##   3     1 WD57    268  57
##   4     1 WD57    273  57
##   5     1 WD57    278  57
##   6     1 WD57    279  57
##   7     1 WD57    281  57
##   8     1 WD57    274  57
##   9     2 WD57    468  57
##  10     2 WD57    484  57
##  11     2 WD57    466  57
##  12     2 WD57    618  57
##  13     3 WD57    518  57
##  14     3 WD57    544  57
##  15     3 WD57    555  57
##  16     3 WD57    553  57
##   1     1 WD64    278  64
##   2     1 WD64    245  64
##   3     1 WD64    269  64
##   4     1 WD64    275  64
##   5     1 WD64    280  64
##   6     1 WD64    281  64
##   7     1 WD64    284  64
##   8     1 WD64    278  64
##   9     2 WD64    478  64
##  10     2 WD64    496  64
##  11     2 WD64    472  64
##  12     2 WD64    628  64
##  13     3 WD64    525  64
##  14     3 WD64    559  64
##  15     3 WD64    548  64
##  16     3 WD64    569  64
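As a side note, gather() has been superseded in newer versions of tidyr. A sketch of the same reshaping with pivot_longer(), assuming tidyr 1.0 or later:

# equivalent long-form conversion with the newer tidyr API
RATSL_alt <- RATS %>%
  pivot_longer(cols = starts_with("WD"), names_to = "WD", values_to = "Weight") %>%
  mutate(Day = as.integer(substr(WD, 3, 4)))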

Plot of the RATSL data

#Access the package ggplot2
library(ggplot2)

# Glimpse the data
glimpse(RATSL)
## Rows: 176
## Columns: 5
## $ ID     <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2, 3,~
## $ Group  <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, 1, ~
## $ WD     <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", ~
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, 470~
## $ Day    <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, 8, ~
# Draw the plot
ggplot(RATSL, aes(x = Day, y = Weight, linetype = ID, color = ID)) +
  geom_line() + geom_point() +
  facet_grid(. ~ Group, labeller = label_both)

From the plots above, we see that each group of rats started from a different initial weight. Group 1 gained the least weight during the study period, while groups 2 and 3 gained weight at approximately the same rate.
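The difference in starting weights can be confirmed directly. A quick check using the objects defined above:

# mean weight per group on the first measurement day
RATSL %>%
  filter(Day == 1) %>%
  group_by(Group) %>%
  summarise(start_weight = mean(Weight))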

Plots of the RATSL data after standardization

# Standardise the variable RATS
RATSL <- RATSL %>%
  group_by(Day) %>%
  mutate(stdweight = (Weight - mean(Weight))/sd(Weight) ) %>%
  ungroup()

# Glimpse the data
glimpse(RATSL)
## Rows: 176
## Columns: 6
## $ ID        <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,~
## $ Group     <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, ~
## $ WD        <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1~
## $ Weight    <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, ~
## $ Day       <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, ~
## $ stdweight <dbl> -1.0011429, -1.1203857, -0.9613953, -0.8421525, -0.8819001, ~
# Plot again with the standardised RATS
ggplot(RATSL, aes(x = Day, y = stdweight, linetype = ID, color = ID)) +
  geom_line() + geom_point() +
  scale_linetype_manual(values = rep(1:6, times = 3)) + # 1-6 are the valid integer linetypes; repeated to cover the 16 rats
  facet_grid(. ~ Group, labeller = label_both) +
  scale_y_continuous(name = "standardized Weight")

From the standardized plots, the profiles are much flatter: relative to the overall mean weight on each day, group 1 stays in place, group 2 drifts slightly upward, and group 3 slightly downward. In other words, the rats largely keep their relative weight positions over time (tracking).

Summary Graph

# n used in the standard error below (the value 9 is carried over from the
# BPRS example; note the RATS groups actually contain 8, 4 and 4 animals)
n <- 9

# Summary data with mean and standard error of RATS by Group and Day
RATSS <- RATSL %>%
  group_by(Group, Day) %>%
  summarise( mean = mean(Weight), se = sd(Weight)/sqrt(n) ) %>%
  ungroup()

# Glimpse the data
glimpse(RATSS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2~
## $ Day   <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 36, ~
## $ mean  <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375, 2~
## $ se    <dbl> 5.073859, 4.364358, 3.825302, 4.533605, 3.685827, 3.927922, 3.65~
# Plot the mean profiles
ggplot(RATSS, aes(x = Day, y = mean, linetype = Group, shape = Group, color = Group)) +
  geom_line() +
  scale_linetype_manual(values = c(1,2,3)) +
  geom_point(size=3) +
  scale_shape_manual(values = c(1,2,3)) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se, linetype="1"), width=0.3) +
  theme(legend.position = c(0.8,0.8)) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")

From the summary plot with means and standard errors, we see that group 1 did not gain much weight during the study, while groups 2 and 3 gained weight at a similar rate.
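The growth rates can also be quantified as the mean weight gain between the first and last measurement days. A sketch (day 64 is the last day in this data):

# mean weight gain per group from day 1 to day 64
RATSL %>%
  group_by(Group) %>%
  summarise(gain = mean(Weight[Day == 64]) - mean(Weight[Day == 1]))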

Outliers

# Create a summary dataset: the mean weight over all days for each rat
# (Day > 0 keeps every measurement here, since the first day is day 1)
RATSL8S <- RATSL %>%
  filter(Day > 0) %>%
  group_by(Group, ID) %>%
  summarise( mean=mean(Weight) ) %>%
  ungroup()

# Glimpse the data
glimpse(RATSL8S)
## Rows: 16
## Columns: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID    <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean  <dbl> 261.0909, 237.6364, 260.1818, 266.5455, 269.4545, 274.7273, 274.~
# Draw a boxplot of the mean versus treatment
ggplot(RATSL8S, aes(x = Group, y = mean, color = Group)) +
  geom_boxplot(outlier.colour="blue", outlier.shape=16,
             outlier.size=10, notch=FALSE) +
  stat_summary(fun = mean, geom = "point", shape=23, size=10, fill =  "red", colour = "blue") +
  scale_y_continuous(name = "mean(Weight), weeks 1-9")

From the boxplot above, group 2 seems to have more variation than groups 1 and 3. We also see three outliers, one in each group.
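If we wanted to repeat the analysis without these outliers, they could be filtered out. The thresholds below are read off the boxplot by eye, so they are an assumption rather than part of the analysis above:

# drop the apparent outliers (thresholds chosen by eye from the boxplot)
RATSL8S1 <- RATSL8S %>%
  filter(!(Group == "1" & mean < 250),
         !(Group == "2" & mean > 550),
         !(Group == "3" & mean < 500))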

# Add the baseline from the original data as a new variable to the summary data
RATSL8S2 <- RATSL8S %>%
  mutate(baseline = RATS$WD1)

# Fit the linear model with the mean as the response 
fit <- lm(mean ~ baseline + Group, data = RATSL8S2)

# Compute the analysis of variance table for the fitted model with anova()
anova(fit)
## Analysis of Variance Table
## 
## Response: mean
##           Df Sum Sq Mean Sq   F value    Pr(>F)    
## baseline   1 252125  252125 2237.0655 5.217e-15 ***
## Group      2    726     363    3.2219   0.07586 .  
## Residuals 12   1352     113                        
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Given that Pr(>F) is smaller than 0.05, we reject the null hypothesis (all means are equal) and we conclude that at least one group is different in weight than the others.

Part 2: Implement the analyses of Chapter 9 of MABS

# load BPRS data
BPRS <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/BPRS.txt", sep = " ", header = TRUE)

# Data
BPRS
##    treatment subject week0 week1 week2 week3 week4 week5 week6 week7 week8
## 1          1       1    42    36    36    43    41    40    38    47    51
## 2          1       2    58    68    61    55    43    34    28    28    28
## 3          1       3    54    55    41    38    43    28    29    25    24
## 4          1       4    55    77    49    54    56    50    47    42    46
## 5          1       5    72    75    72    65    50    39    32    38    32
## 6          1       6    48    43    41    38    36    29    33    27    25
## 7          1       7    71    61    47    30    27    40    30    31    31
## 8          1       8    30    36    38    38    31    26    26    25    24
## 9          1       9    41    43    39    35    28    22    20    23    21
## 10         1      10    57    51    51    55    53    43    43    39    32
## 11         1      11    30    34    34    41    36    36    38    36    36
## 12         1      12    55    52    49    54    48    43    37    36    31
## 13         1      13    36    32    36    31    25    25    21    19    22
## 14         1      14    38    35    36    34    25    27    25    26    26
## 15         1      15    66    68    65    49    36    32    27    30    37
## 16         1      16    41    35    45    42    31    31    29    26    30
## 17         1      17    45    38    46    38    40    33    27    31    27
## 18         1      18    39    35    27    25    29    28    21    25    20
## 19         1      19    24    28    31    28    29    21    22    23    22
## 20         1      20    38    34    27    25    25    27    21    19    21
## 21         2       1    52    73    42    41    39    38    43    62    50
## 22         2       2    30    23    32    24    20    20    19    18    20
## 23         2       3    65    31    33    28    22    25    24    31    32
## 24         2       4    37    31    27    31    31    26    24    26    23
## 25         2       5    59    67    58    61    49    38    37    36    35
## 26         2       6    30    33    37    33    28    26    27    23    21
## 27         2       7    69    52    41    33    34    37    37    38    35
## 28         2       8    62    54    49    39    55    51    55    59    66
## 29         2       9    38    40    38    27    31    24    22    21    21
## 30         2      10    65    44    31    34    39    34    41    42    39
## 31         2      11    78    95    75    76    66    64    64    60    75
## 32         2      12    38    41    36    27    29    27    21    22    23
## 33         2      13    63    65    60    53    52    32    37    52    28
## 34         2      14    40    37    31    38    35    30    33    30    27
## 35         2      15    40    36    55    55    42    30    26    30    37
## 36         2      16    54    45    35    27    25    22    22    22    22
## 37         2      17    33    41    30    32    46    43    43    43    43
## 38         2      18    28    30    29    33    30    26    36    33    30
## 39         2      19    52    43    26    27    24    32    21    21    21
## 40         2      20    47    36    32    29    25    23    23    23    23
# Factor treatment & subject
BPRS$treatment <- factor(BPRS$treatment)
BPRS$subject <- factor(BPRS$subject)

# Take a glimpse at the BPRS data
glimpse(BPRS)
## Rows: 40
## Columns: 11
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ~
## $ subject   <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 1~
## $ week0     <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 66, ~
## $ week1     <int> 36, 68, 55, 77, 75, 43, 61, 36, 43, 51, 34, 52, 32, 35, 68, ~
## $ week2     <int> 36, 61, 41, 49, 72, 41, 47, 38, 39, 51, 34, 49, 36, 36, 65, ~
## $ week3     <int> 43, 55, 38, 54, 65, 38, 30, 38, 35, 55, 41, 54, 31, 34, 49, ~
## $ week4     <int> 41, 43, 43, 56, 50, 36, 27, 31, 28, 53, 36, 48, 25, 25, 36, ~
## $ week5     <int> 40, 34, 28, 50, 39, 29, 40, 26, 22, 43, 36, 43, 25, 27, 32, ~
## $ week6     <int> 38, 28, 29, 47, 32, 33, 30, 26, 20, 43, 38, 37, 21, 25, 27, ~
## $ week7     <int> 47, 28, 25, 42, 38, 27, 31, 25, 23, 39, 36, 36, 19, 26, 30, ~
## $ week8     <int> 51, 28, 24, 46, 32, 25, 31, 24, 21, 32, 36, 31, 22, 26, 37, ~
# Convert to long form
BPRSL <-  BPRS %>% gather(key = weeks, value = bprs, -treatment, -subject)

# Extract the week number
BPRSL <-  BPRSL %>% mutate(week = as.integer(substr(weeks,5,5)))

glimpse(BPRSL)
## Rows: 360
## Columns: 5
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ~
## $ subject   <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 1~
## $ weeks     <chr> "week0", "week0", "week0", "week0", "week0", "week0", "week0~
## $ bprs      <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 66, ~
## $ week      <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ~

First Plot

# Check the dimensions of the data
dim(BPRSL)
## [1] 360   5
# Draw the plot
ggplot(BPRSL, aes(x = week, y = bprs, linetype = subject)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:10, times=4)) +
  facet_grid(. ~ treatment, labeller = label_both) +
  theme(legend.position = "none") + 
  scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs)))

From the first plot, we see a downward trend in bprs for most participants in both treatment groups, although for a few participants bprs increases instead.

The Linear model

# create a regression model BPRS_reg
BPRS_reg <- lm(bprs ~ week + treatment, data = BPRSL)

# print out a summary of the model
summary(BPRS_reg)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16

Here, only week is statistically significant (p < 2e-16); treatment2 is not (p = 0.661).
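Confidence intervals for these coefficients can be obtained with confint(). Note, though, that this model treats the 360 observations as independent, which they are not, so the intervals are likely too narrow:

# 95% confidence intervals for the coefficients
confint(BPRS_reg)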

The Random Intercept Model

# access library lme4
library(lme4)

# Create a random intercept model
BPRS_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)

# Print the summary of the model
summary(BPRS_ref)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2748.7   2768.1  -1369.4   2738.7      355 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0481 -0.6749 -0.1361  0.4813  3.4855 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  subject  (Intercept)  47.41    6.885  
##  Residual             104.21   10.208  
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     1.9090  24.334
## week         -2.2704     0.2084 -10.896
## treatment2    0.5722     1.0761   0.532
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.437       
## treatment2 -0.282  0.000
# Create a vector of the fitted values from the random intercept model
Fitted <- fitted(BPRS_ref)

ggplot(BPRSL, aes(x = week, y = Fitted, group = subject)) +
  geom_line() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(limits = c(min(BPRSL$bprs), max(BPRSL$bprs))) +
  theme(legend.position = "top") +
  labs(title = "The Random Intercept Model")

Random Intercept and Random Slope Model

# create a random intercept and random slope model
BPRS_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2745.4   2772.6  -1365.7   2731.4      353 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8919 -0.6194 -0.0691  0.5531  3.7976 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.8222  8.0512        
##           week         0.9609  0.9802   -0.51
##  Residual             97.4305  9.8707        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     2.1052  22.066
## week         -2.2704     0.2977  -7.626
## treatment2    0.5722     1.0405   0.550
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.582       
## treatment2 -0.247  0.000
# perform an ANOVA test on the two models
anova(BPRS_ref1, BPRS_ref)
## Data: BPRSL
## Models:
## BPRS_ref: bprs ~ week + treatment + (1 | subject)
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRS_ref     5 2748.7 2768.1 -1369.4   2738.7                       
## BPRS_ref1    7 2745.4 2772.6 -1365.7   2731.4 7.2721  2    0.02636 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Create a vector of the fitted values from the random slope model
Fitted <- fitted(BPRS_ref1)

ggplot(BPRSL, aes(x = week, y = Fitted, group = subject)) +
  geom_line() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "BPRS") +
  theme(legend.position = "top") +
  labs(title = "Random Intercept and Random Slope Model")

Here the likelihood-ratio test favors BPRS_ref1: the chi-square statistic is 7.27 with p = 0.026, so the random intercept and random slope model fits significantly better than the random intercept model.
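lme4 also provides confint() for mixed models (profile likelihood by default). A sketch for the chosen model; the computation can take a moment:

# profile-likelihood confidence intervals for BPRS_ref1
confint(BPRS_ref1)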

Random Intercept and Random Slope Model with interaction

# create a random intercept and random slope model
BPRS_ref2 <- lmer(bprs ~ week * treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week * treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2744.3   2775.4  -1364.1   2728.3      352 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0512 -0.6271 -0.0768  0.5288  3.9260 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.9964  8.0620        
##           week         0.9687  0.9842   -0.51
##  Residual             96.4707  9.8220        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##                 Estimate Std. Error t value
## (Intercept)      47.8856     2.2521  21.262
## week             -2.6283     0.3589  -7.323
## treatment2       -2.2911     1.9090  -1.200
## week:treatment2   0.7158     0.4010   1.785
## 
## Correlation of Fixed Effects:
##             (Intr) week   trtmn2
## week        -0.650              
## treatment2  -0.424  0.469       
## wek:trtmnt2  0.356 -0.559 -0.840
# perform an ANOVA test on the two models
anova(BPRS_ref2, BPRS_ref1)
## Data: BPRSL
## Models:
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## BPRS_ref2: bprs ~ week * treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRS_ref1    7 2745.4 2772.6 -1365.7   2731.4                       
## BPRS_ref2    8 2744.3 2775.4 -1364.1   2728.3 3.1712  1    0.07495 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Create a vector of the fitted values
Fitted <- fitted(BPRS_ref2)

# Create a new column fitted to BPRSL
BPRSL2 <- BPRSL %>%
  mutate(Fitted)

ggplot(BPRSL2, aes(x = week, y = Fitted, group = subject)) +
  geom_line() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "BPRS") +
  theme(legend.position = "top") +
  labs(title = "Random Intercept and Random Slope Model with interaction")

Here, the interaction term does not significantly improve the fit (p = 0.075), so BPRS_ref1 remains the best-fitting model.
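As a final visual check, the observed profiles can be drawn next to the fitted ones. A sketch reusing BPRSL2 from the chunk above; gridExtra is assumed to be available, and is used only to arrange the two plots side by side:

# observed values (left) next to fitted values from BPRS_ref2 (right)
library(gridExtra)
p_obs <- ggplot(BPRSL2, aes(x = week, y = bprs, group = subject)) +
  geom_line() + facet_grid(. ~ treatment, labeller = label_both) +
  labs(title = "Observed")
p_fit <- ggplot(BPRSL2, aes(x = week, y = Fitted, group = subject)) +
  geom_line() + facet_grid(. ~ treatment, labeller = label_both) +
  labs(title = "Fitted (BPRS_ref2)")
grid.arrange(p_obs, p_fit, nrow = 1)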

Overall, the fitted profiles show bprs decreasing over the eight-week period for most participants.